21 research outputs found

    leave a trace - A People Tracking System Meets Anomaly Detection

    Full text link
    Video surveillance has always had a negative connotation, among other reasons because of the loss of privacy and because it does not automatically increase public safety. If it were able to detect atypical (i.e. dangerous) situations in real time, autonomously and anonymously, this could change. A prerequisite is a reliable automatic detection of potentially dangerous situations from video data. Classically, this is done by object extraction and tracking; from the derived trajectories, dangerous situations are then determined by detecting atypical trajectories. For ethical reasons, however, it is better to develop such a system on data in which no people are threatened or harmed, and in which they know that such a tracking system is installed. Another important point is that these situations do not occur very often in real, public CCTV areas and are captured properly even less often. In the artistic project leave a trace, the tracked objects, people in the atrium of an institutional building, become actors and thus part of the installation. Real-time visualisation allows interaction by these actors, which in turn creates many atypical interaction situations on which our situation detection can be developed. The data set has evolved over three years and is therefore large. In this article we describe the tracking system and several approaches for the detection of atypical trajectories.
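
The abstract leaves the detection approach open; as an illustration, one simple baseline flags a trajectory whose summary features deviate strongly from a reference set of typical trajectories. This is a hypothetical sketch, not the authors' method:

```python
import math

def features(traj):
    """Summarize a trajectory (list of (x, y) points) by its mean speed
    and total turning angle, two local properties of the kind the
    abstract mentions."""
    speeds, turns = [], []
    for i in range(1, len(traj)):
        dx, dy = traj[i][0] - traj[i-1][0], traj[i][1] - traj[i-1][1]
        speeds.append(math.hypot(dx, dy))
        if i >= 2:
            prev = math.atan2(traj[i-1][1] - traj[i-2][1],
                              traj[i-1][0] - traj[i-2][0])
            cur = math.atan2(dy, dx)
            # wrap the angle difference into [-pi, pi]
            turns.append(abs(math.atan2(math.sin(cur - prev),
                                        math.cos(cur - prev))))
    return sum(speeds) / len(speeds), sum(turns)

def is_atypical(traj, typical, k=3.0):
    """Flag a trajectory whose mean speed lies more than k standard
    deviations from the mean speed of the typical set."""
    ref = [features(t)[0] for t in typical]
    mu = sum(ref) / len(ref)
    sigma = math.sqrt(sum((r - mu) ** 2 for r in ref) / len(ref)) or 1e-9
    return abs(features(traj)[0] - mu) / sigma > k
```

In practice, richer features (turning angles, stop durations) and a proper density model would replace the single z-score.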

    A Reconfigurable Architecture for Real-Time Image Data Compression on Board Satellites

    No full text
    Data products of optical remote sensing systems are increasingly used in many areas of our everyday life. The spatial as well as the spectral resolution of satellite image data increases steadily with new missions, resulting in a higher precision of known procedures and new application scenarios. While the memory capacity requirements can still be fulfilled, the transmission capacity becomes increasingly problematic. Real-time transmission of high-resolution image data is currently not possible. This thesis presents a new image data compression architecture that can be used for current and future projects at the German Aerospace Center (DLR). The architecture has region-of-interest support and offers flexible access to the compressed data based on the CCSDS 122.0-B-1 image data compression standard. Region-of-interest (ROI) coding can be useful in scenarios where on-board classification, registration, or object or change detection algorithms are used. It is also useful to decrease the amount of data that must be transferred to the ground station. Modifications to the standard have been made to permit a change of compression parameters and the re-organization of the bit-stream after compression. An additional index of the compressed data is created, which makes it possible to locate individual parts of the bit-stream. On request, compressed and stored images can be re-assembled and transmitted according to the application's needs and as requested by the ground station. The requirements, the design of the architecture, and its implementation based on reconfigurable hardware are presented. The architecture was developed for a space-qualified Xilinx Virtex-5QV, where a single instance of the architecture is capable of compressing images at a rate of up to 200 Mpx/s (or 400 Mbyte/s for 16-bit images). It operates at a clock frequency of 100 MHz and processes two image pixels per clock cycle.
A Xilinx Virtex-5QV thereby enables compressing images with a width of up to 4096 pixels without the use of external memory. Without external memory and additional interfaces, the power consumption of the architecture is about 4 W. The proposed architecture is one of the fastest implementations reported so far and is sufficient for recent high-resolution systems. Investigations into the resource and power consumption, as well as the availability of external memory, have shown that it should be possible to integrate the design directly on a focal plane.
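
The index over the compressed bit-stream described above, which allows locating and reassembling individual parts on request, can be pictured as a table of offsets. A minimal sketch with hypothetical names and layout, not the actual on-board format:

```python
class BitstreamIndex:
    """Toy index mapping segment ids (e.g. one per image region) to
    (offset, length) within one concatenated compressed bit-stream.
    Illustrative only; not the CCSDS 122.0-B-1 on-board structure."""

    def __init__(self):
        self.stream = bytearray()
        self.entries = {}          # segment_id -> (offset, length)

    def append(self, segment_id, payload: bytes):
        """Record where a compressed segment lands in the stream."""
        self.entries[segment_id] = (len(self.stream), len(payload))
        self.stream += payload

    def locate(self, segment_id):
        """Return (offset, length) of an individual part of the stream."""
        return self.entries[segment_id]

    def reassemble(self, segment_ids) -> bytes:
        """Rebuild a partial bit-stream in any requested order, e.g.
        overview segments first, then detail for the requested regions."""
        out = bytearray()
        for sid in segment_ids:
            off, length = self.entries[sid]
            out += self.stream[off:off + length]
        return bytes(out)
```
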

    About the Impact of CCSDS Image Data Compression on Image Quality

    No full text
    Abstract: Simple quality measures such as the PSNR, or the MSSIM index, which is better adapted to the human eye, are often used to assess the quality of a lossy image compression method. These quality measures quantify the quality loss caused by lossy compression. However, they are not well suited to remote sensing, since remote sensing images are increasingly evaluated automatically and nowadays less frequently by humans. To assess the image quality of an optical system in remote sensing, the MTF and the SNR of the system are examined, among other measures. This work investigates the influence of current lossy compression methods, with a focus on the CCSDS-IDC algorithm, on image quality. If the image data are processed automatically, the resulting image quality ultimately depends strongly on the algorithms employed. It is shown that lossy compression is possible even under high image-quality requirements and that, with a suitable choice of compression parameters, these algorithms are not adversely affected.
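
The PSNR mentioned above as a simple quality measure is straightforward to compute; a sketch for images given as flat lists of pixel values:

```python
import math

def psnr(original, reconstructed, max_val=255):
    """Peak signal-to-noise ratio in dB between two equally sized
    images: 10 * log10(max_val^2 / MSE)."""
    mse = sum((a - b) ** 2
              for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")   # identical images
    return 10 * math.log10(max_val ** 2 / mse)
```

As the abstract argues, such pixel-wise measures say little about how automatic processing chains are affected, which is why system measures like MTF and SNR are examined instead.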

    A New Real-Time Architecture for Image Compression onboard Satellites based on CCSDS Image Data Compression

    Get PDF
    Remote sensing sensors are used in applications ranging from Earth sciences, archaeology, intelligence, and change detection to planetary research and astronomy. Disaster management after floods or earthquakes, detection of environmental pollution, and fire detection are examples of its countless applications. The spatial as well as the spectral resolution of satellite image data increases steadily with new technologies and user requirements, resulting in higher precision and new application scenarios. In the future, it will be possible to derive real-time, application-specific information on board the satellite, also from high-resolution images. On the technical side, there is a tremendous increase in the data rate that has to be handled by such systems. While the memory capacity requirements can still be fulfilled, the transmission capability becomes increasingly problematic. In this paper, an image compression architecture with region-of-interest support and with flexible access to the compressed data, based on the CCSDS 122.0-B-1 image data compression standard, is presented. Modifications to the standard permit a change of compression parameters and the re-organization of the bit-stream after compression. An additional index of the compressed data is created, which makes it possible to locate individual parts of the bit-stream. On request, stored images can be re-assembled according to the application's needs and as requested by the ground station. Interactive transmission of the compressed data is possible, so that overview images can be transmitted first, followed by detailed information for the regions of interest (ROIs).
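
The interactive transmission scheme, overview first and ROI detail afterwards, amounts to a priority ordering of the stored segments. A hypothetical sketch of such an ordering (field names are illustrative, not the paper's format):

```python
def transmission_order(segments):
    """Order compressed segments so that coarse overview data (level 0)
    goes first, then ROI detail, then the remaining detail by level.
    Each segment is a dict with 'level' (0 = coarsest) and 'roi' flag."""
    return sorted(segments,
                  key=lambda s: (s["level"] > 0,   # overview before detail
                                 not s["roi"],     # ROI detail before the rest
                                 s["level"]))      # finer levels last
```
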

    Reconfigurable architecture for real-time image compression on-board satellites

    No full text
    A high-speed image compression architecture with region-of-interest (ROI) support and with flexible access to compressed data based on the Consultative Committee for Space Data Systems 122.0-B-1 image data compression standard is presented. Modifications of the standard permit a change of compression parameters and the reorganization of the bit stream after compression. An additional index of the compressed data is created, which makes it possible to locate individual parts of the bit stream. On request, stored images can be reassembled according to the application's needs and as requested by the ground station. Interactive transmission of the compressed data is possible, such that overview images can be transmitted first, followed by detailed information for the ROI. The architecture was implemented for a Xilinx Virtex-5QV, and a single instance is able to compress images at a rate of 200 Mpx/s at a clock frequency of 100 MHz. The design ensures that all parts of the system achieve high utilization and parallelism. A Virtex-5QV allows compression of images with a width of up to 4096 px without external memory. The power consumption of the architecture is approximately 4 W. This is one of the fastest implementations yet reported and is sufficient for future high-resolution imaging systems.
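
As a sanity check, the quoted throughput follows directly from the clock rate and the two pixels processed per cycle stated in the thesis abstract above:

```python
pixels_per_cycle = 2                       # stated design parameter
clock_hz = 100e6                           # 100 MHz clock
throughput_px = pixels_per_cycle * clock_hz    # pixels per second
bytes_per_px = 2                           # 16-bit pixels
throughput_bytes = throughput_px * bytes_per_px

assert throughput_px == 200e6              # 200 Mpx/s
assert throughput_bytes == 400e6           # 400 Mbyte/s for 16-bit images
```
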

    High performance CCSDS image data compression using GPGPUs for space applications

    Get PDF
    Graphics processing units (GPUs) have become popular computing architectures for inherently data-parallel signal processing applications. In principle, GPUs can achieve a significant speed-up over central processing units (CPUs), especially for data-parallel applications that require high throughput. This paper investigates the use of GPUs for running space-borne image data compression algorithms, with the CCSDS 122.0-B-1 standard as a case study. It proposes an architecture that parallelizes the Bit-Plane Encoder (BPE) stage of CCSDS 122.0-B-1 in lossless mode on a GPU to achieve the high throughput needed for real-time compression of satellite image data streams. Experimental results compare the compression time of the GPU implementation against a state-of-the-art single-threaded CPU implementation and a field-programmable gate array (FPGA) implementation. The GPU implementation on an NVIDIAÂź GeForceÂź GTX 670 achieves a peak throughput of 162.382 Mbyte/s (932.288 Mbit/s) and a substantial average speed-up compared to the software implementation running on a 3.47 GHz single-core IntelÂź Xeonℱ processor. The high-throughput CUDA implementation could potentially be suitable for airborne and space-borne applications in the future, if GPU technology evolves to become radiation-tolerant and space-qualified.
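
The data parallelism exploited on the GPU comes from the fact that the BPE processes blocks of wavelet coefficients independently. A much-simplified CPU analogue, assuming non-negative integer coefficients; this illustrates the parallelization pattern only, not the actual CUDA kernel or the CCSDS bit-plane coding rules:

```python
from concurrent.futures import ThreadPoolExecutor

def bit_planes(block, planes=8):
    """Decompose one block of non-negative coefficients into bit planes,
    most significant plane first; the per-block work the BPE performs."""
    return [[(v >> p) & 1 for v in block] for p in range(planes - 1, -1, -1)]

def encode_blocks(blocks, workers=4):
    """Process independent blocks concurrently; on a GPU, each block (or
    group of blocks) would map to a thread block instead of a pool worker."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(bit_planes, blocks))
```
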

    Traffic Observation and Situation Assessment

    No full text
    The use of camera systems for surveillance tasks (e.g. traffic monitoring) has become a standard procedure and has been practiced for over 20 years. However, most cameras are operated locally and the data are analyzed manually. Locally means a limited field of view and image sequences that are processed independently of other cameras. To enlarge the observation area and to avoid occlusions and non-accessible areas, multi-camera systems with overlapping and non-overlapping cameras are used. The joint processing of image sequences from a multi-camera system is a scientific and technical challenge. Processing is traditionally divided into camera calibration, object detection, tracking, and interpretation. The fusion of information from different cameras is carried out in the world coordinate system. To reduce the network load, a distributed processing concept can be implemented. Object detection and tracking are fundamental image processing tasks for scene evaluation. Situation assessment is based mainly on characteristic local movement patterns (e.g. directions and speeds), from which trajectories are derived. By comparing local properties of the trajectories, atypical movement patterns of each detected object can be recognized. Interactions of different objects can also be predicted with an additional classification algorithm. This presentation discusses trajectory-based recognition algorithms for atypical event detection in multi-object scenes to obtain area-based types of information (e.g. maps of speed patterns, trajectory curvatures, or erratic movements) and shows that two-dimensional areal data analysis of moving objects with multiple cameras offers new possibilities for situational analysis.
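
An area-based speed map of the kind mentioned above can be accumulated from trajectories fused into world coordinates. A minimal sketch; the grid layout and (x, y, t) sampling format are hypothetical:

```python
def speed_map(trajectories, cell=10.0):
    """Accumulate a grid of mean speeds from trajectories given as
    lists of (x, y, t) samples in world coordinates. Returns a dict
    mapping grid cell -> mean speed observed in that cell."""
    sums, counts = {}, {}
    for traj in trajectories:
        for (x0, y0, t0), (x1, y1, t1) in zip(traj, traj[1:]):
            v = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / (t1 - t0)
            key = (int(x1 // cell), int(y1 // cell))   # cell the segment ends in
            sums[key] = sums.get(key, 0.0) + v
            counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}
```

Cells whose observed speeds deviate strongly from such a map are candidates for the erratic-movement detection the talk describes.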

    Calibration and epipolar geometry of generic heterogeneous camera systems

    No full text
    The application of perspective camera systems in photogrammetry and computer vision is state of the art. In recent years, non-perspective and especially omnidirectional camera systems have increasingly been used in close-range photogrammetry tasks. In general, the perspective camera model, i.e. the pinhole model, cannot be applied to non-perspective camera systems. However, several camera models for different omnidirectional camera systems have been proposed in the literature. Using different types of cameras in a heterogeneous camera system may lead to an advantageous combination: the strengths of the different camera systems, e.g. field of view and resolution, result in a new, enhanced camera system. If these different kinds of cameras can be modeled with a unified camera model, the overall calibration process can be simplified. Sometimes it is not possible to specify the camera model in advance; in these cases a generic approach is helpful. Furthermore, simple stereo reconstruction becomes possible, for example using a fisheye and a perspective camera. In this paper, camera models for perspective, wide-angle, and omnidirectional camera systems are evaluated. The crucial initialization of the model's parameters is conducted using a generic method that is independent of the particular camera system. The accuracy of this generic camera calibration approach is evaluated by calibrating a dozen real camera systems. It is shown that a unified method of modeling, parameter approximation, and calibration of interior and exterior orientation can be applied to derive 3D object data.
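
The difference between a perspective and an omnidirectional (here, fisheye) camera can be captured in one projection function of the off-axis angle theta. These are textbook models used for illustration, not the paper's generic calibration method:

```python
import math

def project(X, Y, Z, f, model="perspective"):
    """Project a 3D point in camera coordinates to image coordinates.
    perspective (pinhole): image radius r = f * tan(theta)
    equidistant fisheye:   image radius r = f * theta
    where theta is the angle between the ray and the optical axis."""
    theta = math.atan2(math.hypot(X, Y), Z)
    if model == "perspective":
        r = f * math.tan(theta)
    elif model == "equidistant":
        r = f * theta
    else:
        raise ValueError(f"unknown model: {model}")
    phi = math.atan2(Y, X)                 # direction in the image plane
    return r * math.cos(phi), r * math.sin(phi)
```

Near the optical axis both models agree (tan(theta) ≈ theta), which is one reason a unified treatment with a generic parameter initialization is feasible.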